AAAI.2020 - Heuristic Search and Optimization

Total: 15

#1 A Unifying View on Individual Bounds and Heuristic Inaccuracies in Bidirectional Search

Authors: Vidal Alcázar ; Pat Riddle ; Mike Barley

In the past few years, new and very successful bidirectional heuristic search algorithms have been proposed. Their key novelty is a lower bound on the cost of a solution that includes information from the g values in both directions. Kaindl and Kainz (1997) proposed measuring how inaccurate a heuristic is while expanding nodes in the opposite direction, and using this information to raise the f value of the evaluated nodes. However, this approach comes with a set of disadvantages and has yet to be exploited to its full potential. Additionally, Sadhukhan (2013) presented BAE∗, a bidirectional best-first search algorithm based on the accumulated heuristic inaccuracy along a path. However, no complete comparison with other bidirectional algorithms, either theoretical or empirical, has yet been carried out. In this paper we define individual bounds within the lower-bound framework and show how both Kaindl and Kainz's and Sadhukhan's methods can be generalized, thus creating new bounds. This overcomes previous shortcomings and allows newer algorithms to benefit from these techniques as well. Experimental results show a substantial improvement, of up to an order of magnitude in the number of necessarily-expanded nodes, compared to state-of-the-art near-optimal algorithms on common benchmarks.
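
The g-aware lower bound the abstract refers to can be made concrete. The sketch below shows the standard bound on any solution passing through a forward/backward node pair, assuming a known minimum edge cost; the paper's individual bounds generalize this with heuristic-inaccuracy terms, which are not reproduced here.

```python
def pair_lower_bound(g_f, h_f, g_b, h_b, eps=1.0):
    """Lower bound on the cost of any solution through a forward node with
    values (g_f, h_f) and a backward node with values (g_b, h_b).
    eps is the minimum edge cost in the graph."""
    return max(g_f + h_f,        # forward f value
               g_b + h_b,        # backward f value
               g_f + g_b + eps)  # both g values plus a cheapest connecting edge
```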

#2 An Interactive Regret-Based Genetic Algorithm for Solving Multi-Objective Combinatorial Optimization Problems

Authors: Nawal Benabbou ; Cassandre Leroy ; Thibaut Lust

We propose a new approach that combines genetic algorithms and regret-based incremental preference elicitation for solving multi-objective combinatorial optimization problems with unknown preferences. For the purpose of elicitation, we assume that the decision maker's preferences can be represented by a parameterized scalarizing function, but the parameters are initially unknown. Instead, the imprecision over the parameters is progressively reduced by asking preference queries to the decision maker during the search, helping to identify the best solutions within a population. Our algorithm, called RIGA, can be applied to any multi-objective combinatorial optimization problem provided that the scalarizing function is linear in its parameters and that a (near-)optimal solution can be efficiently determined when preferences are known. Moreover, RIGA runs in polynomial time while asking no more than a polynomial number of queries. For the multi-objective traveling salesman problem, we provide numerical results showing its practical efficiency in terms of the number of queries, computation time, and gap to optimality.
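
As a rough illustration of preference elicitation with a linear scalarizing function, the sketch below maintains a sample-based approximation of the weight polytope and prunes weights inconsistent with the decision maker's answers. Everything here (the sampling, the query-selection rule, the simulated decision maker) is a simplification for illustration; RIGA itself uses regret-based queries inside a genetic algorithm.

```python
import numpy as np

def elicit(population, dm_weights, n_samples=2000, n_queries=5, seed=0):
    """Toy elicitation for a linear scalarizing function f(x; w) = w . x
    (to be maximized). A sample-based approximation of the weight polytope
    is pruned after each answer of the (here simulated) decision maker."""
    rng = np.random.default_rng(seed)
    pop = np.asarray(population, dtype=float)
    W = rng.dirichlet(np.ones(len(dm_weights)), size=n_samples)  # weight samples
    for _ in range(n_queries):
        scores = W @ pop.T                       # (n_samples, n_solutions)
        votes = np.bincount(scores.argmax(axis=1), minlength=len(pop))
        a, b = np.argsort(votes)[-2:]            # two most contested solutions
        prefers_a = dm_weights @ pop[a] >= dm_weights @ pop[b]  # DM's answer
        keep = (W @ pop[a] >= W @ pop[b]) == prefers_a
        if keep.any():
            W = W[keep]                          # discard inconsistent weights
    # recommend the solution that is best on average over surviving weights
    return pop[(W @ pop.T).mean(axis=0).argmax()]
```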

#3 Local Search with Dynamic-Threshold Configuration Checking and Incremental Neighborhood Updating for Maximum k-plex Problem

Authors: Peilin Chen ; Hai Wan ; Shaowei Cai ; Jia Li ; Haicheng Chen

The Maximum k-plex Problem is an important combinatorial optimization problem with increasingly wide applications. In this paper, we propose a novel strategy, named Dynamic-threshold Configuration Checking (DCC), to reduce the cycling problem of local search. Because of the complicated neighborhood relations, all previous local search algorithms for this problem spend a large amount of time identifying feasible neighbors in each step. To further improve performance on dense and challenging instances, we propose a Double-attributes Incremental Neighborhood Updating (DINU) scheme, which reduces the worst-case time complexity per iteration from O(|V|·Δ_G) to O(k·Δ̄_G). Based on the DCC strategy and the DINU scheme, we develop a local search algorithm named DCCplex. Experimental results show that DCCplex performs well on the DIMACS and BHOSLIB benchmarks as well as on real-world massive graphs. In particular, DCCplex improves the best-known lower bound of the maximum k-plex for most dense and challenging instances.
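
The following sketch shows one plausible reading of dynamic-threshold configuration checking: a vertex may re-enter the candidate solution only after its neighborhood has changed more times than a per-vertex threshold that grows as the vertex cycles in and out. The threshold-update policy shown is an assumption for illustration; DCCplex's actual rule is defined in the paper.

```python
class DynamicCC:
    """Dynamic-threshold configuration checking (sketch): vertex v may
    re-enter the candidate solution only after its neighbourhood has
    changed at least threshold[v] times since v last left."""
    def __init__(self, n):
        self.changes = [1] * n      # neighbourhood-change counters
        self.threshold = [1] * n    # per-vertex dynamic thresholds

    def on_flip(self, v, neighbors):
        """v entered or left the solution, changing its neighbours' configurations."""
        for u in neighbors[v]:
            self.changes[u] += 1

    def on_remove(self, v):
        self.changes[v] = 0         # v must now wait for fresh changes
        self.threshold[v] += 1      # assumed policy: raise the bar on each removal

    def allowed(self, v):
        return self.changes[v] >= self.threshold[v]
```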

#4 Envelope-Based Approaches to Real-Time Heuristic Search

Authors: Kevin Gall ; Bence Cserna ; Wheeler Ruml

In real-time heuristic search, the planner must return the next action for the agent within a pre-specified time bound. Many algorithms for this setting are ‘agent-centered’ in that, at every iteration, they only expand states near the agent's current state, discarding the search frontier afterwards. In this paper, we investigate the alternative paradigm in which the search expands a single ever-growing envelope of states. Previous work on envelope-based methods restricts the agent to move along the generated search tree. We propose a more flexible approach in which an auxiliary search is performed within the envelope to guide the agent toward a promising frontier node. Experimental results indicate that intra-envelope search is beneficial in state spaces that are highly interconnected, such as those for grid pathfinding.
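
A minimal sketch of one envelope-based decision epoch, assuming an A*-style frontier kept in a heap: expand a few envelope nodes, then take one greedy step toward the most promising frontier node. The intra-envelope guidance here is deliberately naive; the paper's auxiliary search is more sophisticated.

```python
import heapq

def envelope_step(agent, goal, h, neighbors, frontier, g, budget=100):
    """One decision epoch (sketch): grow the persistent envelope by expanding
    up to `budget` frontier nodes A*-style, then move the agent one greedy
    step toward the most promising frontier node. `frontier` is a heap of
    (f, state); `g` maps envelope states to costs; names are illustrative."""
    for _ in range(budget):
        if not frontier:
            break
        _, s = heapq.heappop(frontier)
        for t, cost in neighbors(s):
            if t not in g or g[s] + cost < g[t]:
                g[t] = g[s] + cost
                heapq.heappush(frontier, (g[t] + h(t, goal), t))
    target = min(frontier)[1] if frontier else goal  # best frontier node
    # naive intra-envelope guidance: step to the neighbour closest to target
    return min((t for t, _ in neighbors(agent) if t in g),
               key=lambda t: h(t, target), default=agent)
```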

#5 Runtime Analysis of Somatic Contiguous Hypermutation Operators in MOEA/D Framework

Authors: Zhengxin Huang ; Yuren Zhou

Somatic contiguous hypermutation (CHM) operators are important variation operators in artificial immune systems. The few existing theoretical studies are only concerned with understanding the optimization behavior of CHM operators on single-objective optimization problems. The MOEA/D framework is one of the most popular strategies for solving multi-objective optimization problems (MOPs). In this paper, we present a runtime analysis of two CHM operators in the MOEA/D framework on five benchmark MOPs, comprising four bi-objective problems and one many-objective problem. Our analyses show that the expected runtimes of CHM operators on the four bi-objective problems are no worse than that of the well-studied standard bit mutation operator. Moreover, using CHM operators in the MOEA/D framework improves the best known upper bound on the many-objective problem by a factor of n. This paper provides insight into the optimization behavior of CHM operators in the well-known MOEA/D framework, and indicates that using CHM operators in the MOEA/D framework is a promising method for handling MOPs.
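
For reference, a somatic contiguous hypermutation operator flips a contiguous block of bits, in contrast to standard bit mutation, which flips each bit independently. Below is a sketch of one common variant (random start, random length, wrapping around the string); the exact parameterization analyzed in the paper may differ.

```python
import random

def contiguous_hypermutation(x):
    """Flip a contiguous (here: wrapping) block of a 0/1 list: pick a random
    start position and a random block length, then flip every bit inside."""
    n = len(x)
    start = random.randrange(n)
    length = random.randint(1, n)     # hypermutation: possibly long blocks
    y = list(x)
    for i in range(length):
        j = (start + i) % n
        y[j] = 1 - y[j]
    return y
```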

#6 Learning to Optimize Variational Quantum Circuits to Solve Combinatorial Problems

Authors: Sami Khairy ; Ruslan Shaydulin ; Lukasz Cincio ; Yuri Alexeev ; Prasanna Balaprakash

Quantum computing is a computational paradigm with the potential to outperform classical methods for a variety of problems. Proposed recently, the Quantum Approximate Optimization Algorithm (QAOA) is considered one of the leading candidates for demonstrating quantum advantage in the near term. QAOA is a variational hybrid quantum-classical algorithm for approximately solving combinatorial optimization problems. The quality of the solution obtained by QAOA for a given problem instance depends on the performance of the classical optimizer used to optimize the variational parameters. In this paper, we formulate the problem of finding optimal QAOA parameters as a learning task in which the knowledge gained from solving training instances can be leveraged to find high-quality solutions for unseen test instances. To this end, we develop two machine-learning-based approaches. Our first approach adopts a reinforcement learning (RL) framework to learn a policy network to optimize QAOA circuits. Our second approach adopts a kernel density estimation (KDE) technique to learn a generative model of optimal QAOA parameters. In both approaches, the training procedure is performed on small problem instances that can be simulated on a classical computer; yet the learned RL policy and the generative model can be used to efficiently solve larger problems. Extensive simulations using the IBM Qiskit Aer quantum circuit simulator demonstrate that our proposed RL- and KDE-based approaches reduce the optimality gap by factors of up to 30.15 when compared with other commonly used off-the-shelf optimizers.
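
The KDE approach can be illustrated in a few lines: fit a density to the optimal parameter vectors collected on small training instances, then sample candidate parameters for unseen instances. The data below is a random placeholder; in practice the columns would be (γ, β) vectors found by classical optimization on simulated circuits.

```python
import numpy as np
from scipy.stats import gaussian_kde

p = 2                                   # QAOA depth
# placeholder training data: columns are optimal (gamma, beta) parameter
# vectors found on small, classically simulable instances
train_params = np.random.rand(2 * p, 50)

kde = gaussian_kde(train_params)        # generative model of good parameters
candidates = kde.resample(20)           # shape (2p, 20): candidate vectors
# each column would seed the classical optimizer (or be used directly)
# when running QAOA on a new, larger problem instance
```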

#7 How the Duration of the Learning Period Affects the Performance of Random Gradient Selection Hyper-Heuristics

Authors: Andrei Lissovoi ; Pietro Oliveto ; John Alasdair Warwicker

Recent analyses have shown that a random gradient hyper-heuristic (HH) using randomised local search (RLSk) low-level heuristics with different neighbourhood sizes k can optimise the unimodal benchmark function LeadingOnes in the best expected time achievable with the available heuristics, if sufficiently long learning periods τ are employed. In this paper, we examine the impact of the learning period on the performance of the hyper-heuristic for standard unimodal benchmark functions with different characteristics: Ridge, where the HH has to learn that RLS1 is always the best low-level heuristic, and OneMax, where different low-level heuristics are preferable in different areas of the search space. We rigorously prove that super-linear learning periods τ are required for the HH to achieve optimal expected runtime for Ridge. Conversely, a sub-logarithmic learning period is the best static choice for OneMax, while using super-linear values for τ increases the expected runtime above the asymptotic unary unbiased black box complexity of the problem. We prove that a random gradient HH which automatically adapts the learning period throughout the run has optimal asymptotic expected runtime for both OneMax and Ridge. Additionally, we show experimentally that it outperforms any static learning period for realistic problem sizes.
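
A random gradient hyper-heuristic of the kind analysed here can be sketched as follows: choose a low-level heuristic uniformly at random, keep it while it produces improvements, and re-draw once τ consecutive steps pass without improvement. This is a schematic rendering of the mechanism, not the exact algorithm from the paper.

```python
import random

def random_gradient_hh(f, x, heuristics, tau, budget):
    """Random gradient hyper-heuristic (sketch): draw a low-level heuristic
    uniformly at random and keep it while it finds an improvement within
    every window of tau steps; otherwise draw anew. f is maximized."""
    op = random.choice(heuristics)
    steps_without_improvement = 0
    for _ in range(budget):
        y = op(x)
        if f(y) > f(x):
            x, steps_without_improvement = y, 0   # success: keep this heuristic
        else:
            steps_without_improvement += 1
            if steps_without_improvement >= tau:  # learning period expired
                op = random.choice(heuristics)
                steps_without_improvement = 0
    return x
```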

#8 On Performance Estimation in Automatic Algorithm Configuration

Authors: Shengcai Liu ; Ke Tang ; Yunwei Lei ; Xin Yao

Over the last decade, research on automated parameter tuning, often referred to as automatic algorithm configuration (AAC), has made significant progress. Although the usefulness of such tools has been widely recognized in real-world applications, the theoretical foundations of AAC are still very weak. This paper addresses this gap by studying the performance estimation problem in AAC. More specifically, it first proves the universally best performance estimator in a practical setting, and then establishes theoretical bounds on the estimation error, i.e., the difference between the training performance and the true performance of a parameter configuration, for finite and infinite configuration spaces respectively. These findings are verified in extensive experiments conducted on four algorithm configuration scenarios involving different problem domains. Moreover, insights for enhancing existing AAC methods are also identified.
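
At its core, the training performance studied here is the empirical mean of a configuration's performance over training instances (and over repeated runs, for randomized algorithms). A minimal sketch, where run(config, instance) is a hypothetical evaluation hook, not an API from the paper:

```python
from statistics import fmean

def training_performance(config, instances, run, runs_per_instance=1):
    """Empirical-mean performance estimator: average the (possibly
    randomized) algorithm's score over training instances and repeated
    runs. `run(config, instance)` is a hypothetical evaluation hook."""
    return fmean(run(config, inst)
                 for inst in instances
                 for _ in range(runs_per_instance))
```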

#9 A Learning Based Branch and Bound for Maximum Common Subgraph Related Problems

Authors: Yanli Liu ; Chu-Min Li ; Hua Jiang ; Kun He

The performance of a branch-and-bound (BnB) algorithm for the maximum common subgraph (MCS) problem and related problems, such as maximum common connected subgraph (MCCS) and induced subgraph isomorphism (SI), crucially depends on the branching heuristic. We propose a branching heuristic inspired by reinforcement learning, with the goal of reaching a tree leaf as early as possible in order to greatly reduce the search tree size. Experimental results show that the proposed heuristic consistently and significantly improves the current best BnB algorithm for the MCS, MCCS and SI problems. An analysis is carried out to give insight into why and how reinforcement learning is useful in the new branching heuristic.
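
The mechanism can be pictured as bandit-style vertex selection: keep a running value estimate per vertex, reward vertices whose selection shrank the search, and branch on the current best. The UCB-style exploration bonus below is an illustrative choice, not the paper's exact reward or update rule.

```python
import math

def pick_branch_vertex(candidates, value, visits, c=1.0):
    """Bandit-style branching (sketch): `value[v]` is a learned estimate of
    how much branching on v shrank the search in the past, and the
    square-root term is an exploration bonus for rarely tried vertices."""
    total = sum(visits[v] for v in candidates) + 1
    return max(candidates,
               key=lambda v: value[v]
               + c * math.sqrt(math.log(total) / (visits[v] + 1)))
```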

#10 Cakewalk Sampling

Authors: Uri Patish ; Shimon Ullman

We study the task of finding good local optima in combinatorial optimization problems. Although combinatorial optimization is NP-hard in general, locally optimal solutions are frequently used in practice. Local search methods, however, typically converge to a limited set of optima that depend on their initialization. Sampling methods, on the other hand, can access any valid solution and thus can be used either directly or alongside methods of the former type as a way of finding good local optima. Since the effectiveness of this strategy depends on the sampling distribution, we derive a robust learning algorithm that adapts sampling distributions towards good local optima of arbitrary objective functions. As a first use case, we empirically study the efficiency with which sampling methods can recover locally maximal cliques in undirected graphs. Not only do we show how our adaptive sampler outperforms related methods, we also show how it can even approach the performance of established clique algorithms. As a second use case, we consider how greedy algorithms can be combined with our adaptive sampler, and we demonstrate how this leads to superior performance in k-medoid clustering. Together, these findings suggest that our adaptive sampler provides an effective strategy for combinatorial optimization problems that arise in practice.
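
The following sketch captures the spirit of such an adaptive sampler: a product of per-variable categorical distributions is sampled in batches and nudged toward above-median samples. The rank-based elite weighting stands in for Cakewalk's actual reward transform, which the abstract does not spell out.

```python
import numpy as np

def adaptive_sampler(f, n_vars, n_vals, iters=100, batch=50, lr=0.1, seed=0):
    """Adaptive sampling sketch: a product of per-variable categorical
    distributions is moved toward the empirical distribution of above-median
    samples. Returns the best solution seen and its objective value."""
    rng = np.random.default_rng(seed)
    logits = np.zeros((n_vars, n_vals))
    best_x, best_v = None, -np.inf
    for _ in range(iters):
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        xs = np.stack([rng.choice(n_vals, size=batch, p=probs[i])
                       for i in range(n_vars)], axis=1)   # (batch, n_vars)
        scores = np.array([f(x) for x in xs])
        if scores.max() > best_v:
            best_x, best_v = xs[scores.argmax()].copy(), scores.max()
        elite = xs[scores >= np.median(scores)]           # rank-based elite
        for i in range(n_vars):
            counts = np.bincount(elite[:, i], minlength=n_vals)
            logits[i] += lr * (counts / len(elite) - probs[i])
    return best_x, best_v
```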

#11 Subset Selection by Pareto Optimization with Recombination

Authors: Chao Qian ; Chao Bian ; Chao Feng

Subset selection, i.e., selecting a limited number of items that optimize a given objective function, is a fundamental problem with various applications such as unsupervised feature selection and sparse regression. By employing a multi-objective evolutionary algorithm (EA) with mutation only to optimize the given objective function and minimize the number of selected items simultaneously, the recently proposed POSS algorithm achieves state-of-the-art performance for subset selection. In this paper, we propose the PORSS algorithm, which incorporates recombination, a characteristic feature of EAs, into POSS. We prove that PORSS achieves the same optimal polynomial-time approximation guarantee as POSS when the objective function is monotone, and can find an optimal solution efficiently in some cases where POSS cannot. Extensive experiments on unsupervised feature selection and sparse regression show the superiority of PORSS over POSS. Our analysis also theoretically discloses that recombination of diverse solutions is more likely than mutation alone to generate diverse variations, thereby leading to better exploration; this may be of independent interest for understanding the influence of recombination.
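
A minimal sketch of the bi-objective scheme with recombination: maximize f(x) while minimizing |x|, keep an archive of non-dominated subsets, and create offspring by uniform crossover of two archive members followed by bit-wise mutation (a mutation-only variant like POSS would skip the crossover step). Parent selection and other details are simplified.

```python
import random

def porss_sketch(f, n, budget, k):
    """Pareto optimization for subset selection with recombination (sketch).
    Subsets are 0/1 tuples of length n; f is maximized, size is minimized;
    the best archived subset with at most k items is returned."""
    def dominates(a, b):   # objectives: (f value, size); larger f, smaller size
        return a[0] >= b[0] and a[1] <= b[1] and a != b

    empty = (0,) * n
    archive = {empty: (f(empty), 0)}
    for _ in range(budget):
        p1, p2 = random.choice(list(archive)), random.choice(list(archive))
        child = tuple(random.choice(pair) for pair in zip(p1, p2))   # recombination
        child = tuple(b ^ (random.random() < 1 / n) for b in child)  # bit-wise mutation
        obj = (f(child), sum(child))
        if not any(dominates(v, obj) for v in archive.values()):
            archive = {x: v for x, v in archive.items() if not dominates(obj, v)}
            archive[child] = obj
    feasible = [(v[0], x) for x, v in archive.items() if v[1] <= k]
    return max(feasible)[1] if feasible else None
```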

#12 Asymptotic Risk of Bézier Simplex Fitting

Authors: Akinori Tanaka ; Akiyoshi Sannai ; Ken Kobayashi ; Naoki Hamada

Bézier simplex fitting is a novel data modeling technique which utilizes geometric structures of data to approximate the Pareto set of multi-objective optimization problems. There are two fitting methods based on different sampling strategies. The inductive skeleton fitting employs stratified subsampling from the skeletons of a simplex, whereas the all-at-once fitting uses non-stratified sampling that treats the simplex as a single object. In this paper, we analyze the asymptotic risks of these Bézier simplex fitting methods and derive the optimal subsample ratio for the inductive skeleton fitting. It is shown that the inductive skeleton fitting with the optimal ratio has a smaller risk when the degree of the Bézier simplex is less than three. These results are verified numerically under small to moderate sample sizes. In addition, we provide two complementary applications of our theory: a generalized location problem and multi-objective hyper-parameter tuning of the group lasso. The former can be represented by a Bézier simplex of degree two, where the inductive skeleton fitting outperforms the all-at-once fitting. The latter can be represented by a Bézier simplex of degree three, where the all-at-once fitting has the advantage.
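
For concreteness, evaluating a Bézier simplex follows the standard definition B(t) = Σ_{|α|=d} (d!/α!) t^α p_α, where t is a barycentric coordinate on the simplex and the p_α are control points indexed by multi-indices α of degree d. Only evaluation is shown below; the fitting methods in the paper estimate the p_α from data.

```python
import math
import numpy as np

def bezier_simplex(control, t):
    """Evaluate B(t) = sum_{|alpha|=d} d!/alpha! * prod_i t_i^alpha_i * p_alpha.
    `control` maps multi-indices alpha (tuples summing to the degree d) to
    control points; `t` is a barycentric coordinate (entries sum to 1)."""
    d = sum(next(iter(control)))            # degree, read off any multi-index
    out = 0.0
    for alpha, p in control.items():
        coef, term = math.factorial(d), 1.0
        for ti, ai in zip(t, alpha):
            coef //= math.factorial(ai)     # multinomial coefficient d!/alpha!
            term *= ti ** ai
        out = out + coef * term * np.asarray(p, dtype=float)
    return out

# A degree-2 Bézier simplex on 2 barycentric coordinates is a quadratic Bézier curve:
control = {(2, 0): [0.0, 0.0], (1, 1): [1.0, 1.0], (0, 2): [2.0, 0.0]}
print(bezier_simplex(control, (0.5, 0.5)))  # [1.0, 0.5], the curve's midpoint
```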

#13 Trading Convergence Rate with Computational Budget in High Dimensional Bayesian Optimization

Authors: Hung Tran-The ; Sunil Gupta ; Santu Rana ; Svetha Venkatesh

Scaling Bayesian optimisation (BO) to high-dimensional search spaces is an active and open research problem, particularly when no assumptions are made about the structure of the function. The main reason is that at each iteration, BO requires the global maximisation of an acquisition function, which is itself a non-convex optimization problem in the original search space. With growing dimensions, the computational budget for this maximisation becomes increasingly insufficient, leading to inaccurate solutions. This inaccuracy adversely affects both the convergence and the efficiency of BO. We propose a novel approach in which the acquisition function only requires maximisation on a discrete set of low-dimensional subspaces embedded in the original high-dimensional search space. Unlike many recent high-dimensional BO methods, our method is free of any low-dimensional structure assumption on the function. Optimising the acquisition function in low-dimensional subspaces allows our method to obtain accurate solutions within a limited computational budget. We show that in spite of this convenience, our algorithm remains convergent. In particular, the cumulative regret of our algorithm grows only sub-linearly with the number of iterations. More importantly, as evident from our regret bounds, our algorithm provides a way to trade the convergence rate against the number of subspaces used in the optimisation. Finally, when the number of subspaces is "sufficiently large", our algorithm's cumulative regret is at most O*(√(Tγ_T)), as opposed to O*(√(DTγ_T)) for the GP-UCB of Srinivas et al. (2012), removing a crucial factor of √D, where D is the dimensionality of the input space. We perform extensive empirical experiments showing that the sample efficiency of our method is better than that of existing methods on many optimisation problems involving dimensions up to 5000.
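
The core computational trick can be sketched as follows: rather than maximizing the acquisition function over the full D-dimensional box, maximize it over a discrete set of random low-dimensional subspaces. The axis-aligned subspaces and random-sampling inner loop below are illustrative choices; the paper's subspace construction and inner optimizer differ.

```python
import numpy as np

def maximize_acq_on_subspaces(acq, D, n_subspaces=10, d=2,
                              n_candidates=500, seed=0):
    """Maximize acq over random d-dimensional axis-aligned subspaces of
    [0, 1]^D instead of the full space (sketch). Each subspace fixes D - d
    coordinates at a random anchor and searches the d free ones by random
    sampling."""
    rng = np.random.default_rng(seed)
    best_x, best_v = None, -np.inf
    for _ in range(n_subspaces):
        anchor = rng.random(D)                        # fixed coordinates
        free = rng.choice(D, size=d, replace=False)   # subspace dimensions
        X = np.tile(anchor, (n_candidates, 1))
        X[:, free] = rng.random((n_candidates, d))
        vals = np.array([acq(x) for x in X])
        if vals.max() > best_v:
            best_x, best_v = X[vals.argmax()], vals.max()
    return best_x
```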

#14 Reduction and Local Search for Weighted Graph Coloring Problem

Authors: Yiyuan Wang ; Shaowei Cai ; Shiwei Pan ; Ximing Li ; Monghao Yin

The weighted graph coloring problem (WGCP) is an important extension of the graph coloring problem (GCP) with wide applications. Compared to GCP, for which numerous methods have been developed and even massive graphs with millions of vertices can be solved well, fewer works address WGCP, and no solution is available for solving WGCP on massive graphs. This paper explores techniques for solving WGCP, including a lower bound and a reduction rule based on clique sampling, and a local search algorithm based on two selection rules and a new variant of configuration checking. The result is our algorithm RedLS (Reduction plus Local Search). Experiments are conducted to compare RedLS with state-of-the-art algorithms on massive graphs as well as on conventional benchmarks studied in previous works. RedLS exhibits very good performance and robustness, significantly outperforming previous algorithms on all benchmarks.
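
The clique-based bound rests on a simple observation: the vertices of a clique must all receive different colors, and the color class containing vertex v costs at least weight[v], so the total weight of any clique lower-bounds the optimal coloring cost. A simplified sampling version of such a bound (RedLS's actual sampling and reduction rules are in the paper):

```python
import random

def clique_sampling_lower_bound(adj, weight, samples=1000, seed=0):
    """Lower bound for weighted graph coloring: greedily grow random cliques
    and keep the heaviest. `adj` maps each vertex to its neighbour set."""
    rng = random.Random(seed)
    vertices = list(adj)
    best = 0
    for _ in range(samples):
        v = rng.choice(vertices)
        clique, cand = [v], set(adj[v])
        while cand:
            u = max(cand, key=lambda x: weight[x])  # heaviest extension first
            clique.append(u)
            cand &= adj[u]                          # stay a clique
        best = max(best, sum(weight[x] for x in clique))
    return best
```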

#15 Enumerating Maximal k-Plexes with Worst-Case Time Guarantee

Authors: Yi Zhou ; Jingwei Xu ; Zhenyu Guo ; Mingyu Xiao ; Yan Jin

The problem of enumerating all maximal cliques in a graph is a key primitive in a variety of real-world applications such as community detection. However, in practice, communities are rarely formed as cliques due to data noise. Hence the k-plex, a subgraph in which every vertex is adjacent to all but at most k vertices, is introduced as a relaxation of the clique. In this paper, we investigate the problem of enumerating all maximal k-plexes and present FaPlexen, an enumeration algorithm which integrates the “pivot” heuristic and new branching schemes. To the best of our knowledge, FaPlexen is the first algorithm that lists all maximal k-plexes with a provable worst-case running time of O(n²γⁿ) in a graph with n vertices, where γ < 2. We then propose another algorithm, CommuPlex, which non-trivially extends FaPlexen to find all maximal k-plexes of prescribed size for community detection in massive real-life networks. We finally carry out experiments on both real and synthetic graphs and demonstrate that our algorithms run much faster than the state-of-the-art algorithms.
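
Using the definition in the abstract, membership is easy to check: a set S is a k-plex if every vertex of S has at least |S| − k neighbours inside S (the vertex itself counts among the at most k non-neighbours). A small helper and example, with adjacency given as a dict of neighbour sets:

```python
def is_k_plex(S, adj, k):
    """S is a k-plex iff every v in S has at least |S| - k neighbours in S."""
    return all(sum(1 for u in S if u in adj[v]) >= len(S) - k for v in S)

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
print(is_k_plex({1, 2, 3}, adj, 1))     # True: a clique is a 1-plex
print(is_k_plex({1, 2, 3, 4}, adj, 2))  # False: vertex 4 has one neighbour in S
```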